Interoperability is a significant problem in Building Information Modeling (BIM). Object type, a critical piece of semantic information needed by multiple BIM applications such as scan-to-BIM and code compliance checking, also suffers when BIM data are exchanged or models are created with software from other domains. It can be supplemented using deep learning. Current deep learning methods mainly learn from the shape information of BIM objects for classification, leaving the relational information inherent in the BIM context unused. To address this issue, we introduce a two-branch geometric-relational deep learning framework that augments previous geometric classification methods with relational information. We also present IFCNet++, a BIM object dataset that contains both geometric and relational information about the objects. Experiments show that our framework can be flexibly adapted to different geometric methods, and that relational features do act as a bonus to general geometric learning methods, markedly improving their classification performance, thereby reducing the manual labor of checking models and increasing the practical value of enriched BIM models.
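A minimal sketch of how such a two-branch design could be wired up: one branch encodes object geometry, the other encodes relational features from the BIM context, and their embeddings are fused for classification. The module sizes, the relational feature dimensionality, and fusion by concatenation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GeometricRelationalClassifier(nn.Module):
    """Two-branch classifier: geometric features + relational features (illustrative)."""
    def __init__(self, geo_dim=1024, rel_dim=64, num_classes=13):
        super().__init__()
        # Geometric branch: stands in for any point-cloud/mesh/multi-view encoder
        self.geo_branch = nn.Sequential(nn.Linear(geo_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        # Relational branch: encodes the object's relations within the BIM context
        self.rel_branch = nn.Sequential(nn.Linear(rel_dim, 64), nn.ReLU(), nn.Linear(64, 32))
        # Fusion by concatenation, followed by a classification head
        self.head = nn.Linear(128 + 32, num_classes)

    def forward(self, geo_feat, rel_feat):
        g = self.geo_branch(geo_feat)
        r = self.rel_branch(rel_feat)
        return self.head(torch.cat([g, r], dim=-1))

model = GeometricRelationalClassifier()
logits = model(torch.randn(8, 1024), torch.randn(8, 64))  # batch of 8 BIM objects
```

Because the geometric branch is a drop-in encoder, the same fusion pattern can wrap different geometric learning methods, which is the flexibility the abstract refers to.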
Choosing the values of hyper-parameters in sparse Bayesian learning (SBL) can significantly impact performance. However, the hyper-parameters are normally tuned manually, which is often a difficult task. Most recently, effective automatic hyper-parameter tuning was achieved by using an empirical auto-tuner. In this work, we address the issue of hyper-parameter auto-tuning using neural network (NN)-based learning. Inspired by the empirical auto-tuner, we design and learn an NN-based auto-tuner, and show that considerable improvements in convergence rate and recovery performance can be achieved.
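As a hedged illustration of what an NN-based hyper-parameter auto-tuner might look like, the sketch below uses a small MLP that maps per-iteration statistics (e.g., a residual norm and a sparsity estimate) to a proposed hyper-parameter value inside an iterative SBL-style loop. The chosen input statistics, network size, and positivity constraint are assumptions; the paper's actual auto-tuner design may differ.

```python
import torch
import torch.nn as nn

class AutoTuner(nn.Module):
    """Small MLP that proposes a hyper-parameter value from iteration statistics (illustrative)."""
    def __init__(self, n_stats=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_stats, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, stats):
        # softplus keeps the proposed hyper-parameter positive
        return nn.functional.softplus(self.net(stats))

tuner = AutoTuner()
# Example: feed the current residual norm and sparsity level of an SBL iterate
stats = torch.tensor([[0.37, 0.12]])
beta = tuner(stats)  # proposed hyper-parameter for the next iteration
```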
Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, some probing methods have been applied. However, these methods fail to consider the inherent characteristics of code. In this paper, to address the problem, we propose a novel probing method, CAT-probing, to quantitatively interpret how CodePTMs attend to code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers, filtering out tokens whose attention scores are too small. After that, we define a new metric, the CAT-score, to measure the commonality between the token-level attention scores generated by CodePTMs and the pair-wise distances between corresponding AST nodes. The higher the CAT-score, the stronger the ability of CodePTMs to capture code structure. We conduct extensive experiments integrating CAT-probing with representative CodePTMs across different programming languages. Experimental results show the effectiveness of CAT-probing for CodePTM interpretation. Our code and data are publicly available at https://github.com/nchen909/CodeAttention.
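A rough sketch of the kind of comparison CAT-probing describes: given a model's token-token attention matrix and a matrix of pairwise AST-node distances for the same (filtered) tokens, compute a commonality score between them. The exact scoring rule from the paper is not reproduced here; the agreement-based formulation and thresholds below are assumptions for illustration.

```python
import numpy as np

def cat_score(attention, ast_distance, attn_thresh=0.05, dist_thresh=2):
    """Illustrative commonality score between attention and AST structure.

    attention:    (n, n) token-level attention scores from one CodePTM head
    ast_distance: (n, n) pairwise distances between the tokens' AST nodes
    """
    attends = attention > attn_thresh        # token pairs the model attends to
    close = ast_distance <= dist_thresh      # token pairs that are structurally close in the AST
    # fraction of attended pairs that are also close in the AST (higher = more structure-aware)
    return (attends & close).sum() / max(attends.sum(), 1)

attn = np.random.rand(6, 6) * 0.1
dist = np.random.randint(1, 5, size=(6, 6))
print(cat_score(attn, dist))
```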
HAZOP is a safety paradigm for revealing hazards in industry, and its reports contain valuable hazard events (HAEs). Research on HAE classification has many irreplaceable pragmatic values, yet little attention has been paid to this topic. In this paper, we propose a novel deep learning model called DLF to explore HAE classification from a linguistic perspective via fractal methods. The motivation is that (1) an HAE can naturally be regarded as a kind of time series, and (2) the meaning of an HAE is driven by the arrangement of its words. Specifically, we first employ BERT to vectorize the HAE. We then propose a new multifractal method, HMF-DFA, to compute the HAE fractal series by analyzing the HAE vectors treated as time series. Finally, we design a new hierarchical gating neural network (HGNN) to process the HAE fractal series and complete HAE classification. We take 18 industrial processes as a case study and launch experiments on their HAZOP reports. The experimental results show that our DLF classifier is satisfactory and promising, that the proposed HMF-DFA and HGNN are effective, and that introducing linguistic fractals into HAEs is feasible. Our HAE classification system can serve HAZOP and bring application incentives to experts, engineers, employees, and enterprises, which benefits the intelligent development of industrial safety. We hope our research can provide more support for the daily practice of industrial safety and fractal theory.
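To make the "HAE as a time series" idea concrete, the sketch below runs a standard detrended fluctuation analysis (DFA) over a one-dimensional series derived from BERT token embeddings (here, the embedding norms). HMF-DFA itself is the authors' multifractal extension; this plain DFA is only meant to illustrate the style of analysis, not to reproduce it.

```python
import numpy as np

def dfa_exponent(series, scales=(4, 8, 16, 32)):
    """Plain DFA scaling exponent of a 1-D series (illustrative, not the paper's HMF-DFA)."""
    y = np.cumsum(series - np.mean(series))               # integrated profile
    fluctuations = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    # slope of log F(s) versus log s gives the scaling exponent
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

# e.g. series = np.linalg.norm(bert_token_embeddings, axis=1) for one hazard event
print(dfa_exponent(np.random.randn(256)))
```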
HAZOP can expose hazards in the form of textual information, and studying their classification is of great significance for the development of industrial informatics, benefiting safety early warning, decision support, policy evaluation, and so on. However, there is currently no research on this important topic. In this paper, we propose a novel model called DLGM for hazard classification via deep learning. Specifically, we first utilize BERT to vectorize the hazard text and treat it as a hazard time series (HTS). Second, we build a grey model, FSGM(1,1), to model the HTS and obtain grey guidance in the sense of structural parameters. Finally, we design a hierarchical feature fusion neural network (HFFNN) to investigate the HTS with grey guidance (HTSGG) from three themes, where the HFFNN is a hierarchical structure with four kinds of modules: two feature encoders, a gating mechanism, and a deepening mechanism. We take 18 industrial processes as application cases and launch a series of experiments. The experimental results demonstrate that DLGM shows promising talent for hazard classification and that FSGM(1,1) and HFFNN are effective. We hope our research can contribute value and support to the daily practice of industrial safety.
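For readers unfamiliar with grey models, a minimal classical GM(1,1) fit is sketched below; the paper's FSGM(1,1) is a variant whose structural parameters provide the "grey guidance", so treat this only as background for what such a model computes.

```python
import numpy as np

def gm11(x0):
    """Classical GM(1,1) grey model: returns development coefficient a and grey input b."""
    x1 = np.cumsum(x0)                                   # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # least-squares estimate of [a, b]
    return a, b

def gm11_predict(x0, steps):
    a, b = gm11(x0)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    return np.diff(x1_hat, prepend=0.0)                  # restore to the original (non-cumulative) scale

x0 = np.array([2.87, 3.28, 3.34, 3.62, 3.84])
print(gm11(x0))             # structural parameters (a, b)
print(gm11_predict(x0, 2))  # restored series plus a two-step extrapolation
```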
Task generalization is a longstanding challenge in natural language processing (NLP). Recent research attempts to improve the task generalization ability of pre-trained language models by mapping NLP tasks into human-readable prompt forms. However, these approaches require laborious and inflexible prompt engineering, and different prompts for the same downstream task may yield unstable performance. We propose Unified Schema Prompt, a flexible and extensible prompting method that automatically customizes learnable prompts for each task according to its input schema. It models shared knowledge across tasks while preserving the characteristics of different task schemas, thereby enhancing task generalization ability. The schema prompt takes the explicit data structure of each task to formulate prompts, and thus involves almost no human effort. To test the task generalization ability of the schema prompt, we conduct schema prompt-based multi-task pre-training on a wide variety of general NLP tasks. The framework achieves strong zero-shot and few-shot generalization performance on 16 unseen downstream tasks from 8 task types (e.g., QA, NLI, etc.). Furthermore, comprehensive analyses demonstrate the effectiveness of each component of the schema prompt, its flexibility in task compositionality, and its ability to improve performance under the full-data fine-tuning setting.
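The core idea, formulating a prompt directly from the task's input schema, can be sketched as follows. The key names, the placeholder tokens standing in for learnable prompt vectors, and the formatting are illustrative assumptions, not the paper's exact template.

```python
def build_schema_prompt(task_schema, example):
    """Compose a prompt from a task's input schema (illustrative).

    task_schema: ordered list of input keys the task defines, e.g. ["premise", "hypothesis"]
    example:     dict mapping those keys to the actual input text
    Each key contributes its name, a slot for learnable prompt tokens, and the
    corresponding text, so the schema itself structures the prompt.
    """
    parts = []
    for key in task_schema:
        parts.append(f"[{key}] <prompt_{key}> {example[key]}")
    return " ".join(parts)

nli_example = {"premise": "A man is playing a guitar.", "hypothesis": "A person makes music."}
print(build_schema_prompt(["premise", "hypothesis"], nli_example))
# [premise] <prompt_premise> A man is playing a guitar. [hypothesis] <prompt_hypothesis> A person makes music.
```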
Existing learning-based frame interpolation algorithms extract consecutive frames from high-speed natural videos to train their models. Compared with natural videos, cartoon videos usually have a lower frame rate. In addition, the motion between consecutive cartoon frames is often nonlinear, which breaks the linear motion assumption of interpolation algorithms. Therefore, it is not suitable to generate a training set directly from cartoon videos. To better adapt frame interpolation algorithms from natural videos to animation videos, we propose AutoFI, a simple and effective method that automatically renders training data for deep animation video interpolation. AutoFI adopts a layered architecture to render synthetic data, which ensures that the linear motion assumption holds. Experimental results show that AutoFI performs favorably for training DAIN and ANIN. However, most frame interpolation algorithms still fail in error-prone regions, such as fast motion or large occlusion. Besides AutoFI, we also propose a plug-in sketch-based post-processing module, named SktFI, to manually refine the final results using user-provided sketches. With AutoFI and SktFI, the interpolated animation frames show high perceptual quality.
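A minimal sketch of layered rendering with guaranteed linear motion: a foreground layer is translated by a constant per-frame offset over a static background, and the middle frame of each triplet serves as the interpolation target. The layer content, compositing, and offsets are simplified assumptions, not AutoFI's actual renderer.

```python
import numpy as np

def render_triplet(background, sprite, mask, start, velocity):
    """Render (frame0, frame1, frame2) with the sprite moving linearly (illustrative)."""
    frames = []
    for t in range(3):
        canvas = background.copy()
        x, y = start[0] + t * velocity[0], start[1] + t * velocity[1]
        h, w = sprite.shape[:2]
        region = canvas[y:y + h, x:x + w]
        canvas[y:y + h, x:x + w] = np.where(mask, sprite, region)  # alpha-style compositing
        frames.append(canvas)
    return frames  # frames[1] is the ground-truth middle frame for interpolation training

bg = np.zeros((128, 128, 3), dtype=np.uint8)
sprite = np.full((16, 16, 3), 255, dtype=np.uint8)
mask = np.ones((16, 16, 3), dtype=bool)
f0, f1, f2 = render_triplet(bg, sprite, mask, start=(10, 20), velocity=(4, 2))
```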
Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. This specialization in QA research hinders systems from modeling commonalities between tasks and from generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves the QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models knowledge generalization across all QA tasks while keeping knowledge customization for every specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus in structural prompt format, which endows the model with the commonly required QA abilities. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. Moreover, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.
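A small, hedged illustration of how a single QA model could consume a structural prompt that covers different QA task types: shared slots carry the commonalities, while optional slots customize specific tasks. The slot names below are hypothetical and are not copied from ProQA's actual input schema.

```python
def format_qa_input(task_type, question, context, options=None):
    """Serialize one QA example into a structural, slot-based prompt (illustrative)."""
    parts = [f"[task] {task_type}", f"[question] {question}", f"[context] {context}"]
    if options:  # slot present only for multiple-choice style tasks
        parts.append("[options] " + " | ".join(options))
    return " ".join(parts)

# The same model sees extractive and multiple-choice QA through one shared schema:
print(format_qa_input("extractive", "Where was the treaty signed?", "The treaty was signed in Paris."))
print(format_qa_input("multiple-choice", "Which is a mammal?", "N/A", ["whale", "trout", "sparrow"]))
```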
Intelligent service robots need to perform a variety of tasks in dynamic environments. Despite significant progress in robotic grasping, it remains a challenge for a robot to decide where to grasp when given different tasks in unstructured real-world environments. To overcome this challenge, creating a proper knowledge representation framework is key. Unlike previous work, in this paper a task is defined as a triplet comprising the grasping tool, the desired action, and the target object. Our proposed algorithm, GATER (Grasp-Action-Target Embeddings and Relations), models the relationships among grasping tool, action, and target object in an embedding space. To validate our method, a novel dataset is created for task-specific grasping. GATER is trained on the new dataset and achieves task-specific grasping inference with a 94.6% success rate. Finally, the effectiveness of GATER is tested on a real service robot platform. GATER also has potential in human behavior prediction and human-robot interaction.
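One way to read "modeling the relationships among grasping tool, action, and target object in an embedding space" is a translation-style triplet objective, sketched below with a margin ranking loss over corrupted triplets. The embedding dimensionality, the TransE-like scoring rule, and the negative-sampling step are assumptions, not GATER's published formulation.

```python
import torch
import torch.nn as nn

class TripletEmbedder(nn.Module):
    """Embeds (tool, action, target) triplets with a translation-style score (illustrative)."""
    def __init__(self, n_tools, n_actions, n_targets, dim=64):
        super().__init__()
        self.tool = nn.Embedding(n_tools, dim)
        self.action = nn.Embedding(n_actions, dim)
        self.target = nn.Embedding(n_targets, dim)

    def score(self, tool, action, target):
        # lower score = more plausible (tool embedding + action embedding should land near target)
        return (self.tool(tool) + self.action(action) - self.target(target)).norm(dim=-1)

model = TripletEmbedder(10, 5, 20)
tool, act, tgt = torch.tensor([1]), torch.tensor([2]), torch.tensor([7])
neg_tgt = torch.tensor([3])  # corrupted triplet for contrastive training
loss = torch.relu(1.0 + model.score(tool, act, tgt) - model.score(tool, act, neg_tgt)).mean()
loss.backward()
```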
In recent years, segmentation methods based on deep convolutional neural networks (CNNs) have achieved state-of-the-art results on many medical image analysis tasks. However, most of these methods improve performance by optimizing the structure of U-Net or adding new functional modules to it, ignoring the complementarity and fusion of coarse-grained and fine-grained semantic information. To address this problem, we propose a medical image segmentation framework called Progressive Learning Network (PL-Net), which comprises Internal Progressive Learning (IPL) and External Progressive Learning (EPL). PL-Net has the following advantages: (1) IPL divides feature extraction into two "steps", which mix receptive fields of different sizes and capture semantic information from coarse-grained to fine-grained without introducing additional parameters; (2) EPL divides the training process into two "stages" to optimize the parameters, fusing coarse-grained information in the earlier stage and fine-grained information in the later stage. We evaluate our method on different medical image analysis tasks, and the results show that the segmentation performance of PL-Net is superior to U-Net and its state-of-the-art variants.
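A hedged sketch of the IPL idea of splitting feature extraction into two "steps" whose outputs mix different receptive fields: the second step operates on the first step's features, effectively enlarging the receptive field, and the two are fused by concatenation. The kernel sizes and channel split below are illustrative assumptions rather than PL-Net's exact design.

```python
import torch
import torch.nn as nn

class TwoStepBlock(nn.Module):
    """Two-step feature extraction mixing different receptive fields (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        half = out_ch // 2
        # step 1: small receptive field, captures fine-grained detail
        self.step1 = nn.Sequential(nn.Conv2d(in_ch, half, 3, padding=1), nn.ReLU(inplace=True))
        # step 2: builds on step-1 features, so its effective receptive field is larger
        self.step2 = nn.Sequential(nn.Conv2d(half, out_ch - half, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        fine = self.step1(x)
        coarse = self.step2(fine)
        return torch.cat([fine, coarse], dim=1)  # fuse coarse- and fine-grained features

block = TwoStepBlock(3, 64)
print(block(torch.randn(1, 3, 96, 96)).shape)  # torch.Size([1, 64, 96, 96])
```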